Face Generation

In this project, you'll use generative adversarial networks to generate new images of faces.

Get the Data

You'll be using two datasets in this project:

  • MNIST
  • CelebA

Since the CelebA dataset is complex and this is your first project using GANs, we want you to test your neural network on MNIST before CelebA. Running the GAN on MNIST will let you see how well your model trains sooner.

If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".

In [1]:
data_dir = './data'

# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
data_dir = '/input'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

#helper.download_extract('mnist', data_dir)
#helper.download_extract('celeba', data_dir)

Explore the Data

MNIST

As you're aware, the MNIST dataset contains images of handwritten digits. You can change how many examples are displayed by changing show_n_images.

In [2]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot

mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
Out[2]:
<matplotlib.image.AxesImage at 0x7f7e80ffdb00>

CelebA

The CelebFaces Attributes Dataset (CelebA) contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can change how many examples are displayed by changing show_n_images.

In [3]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
celeba_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(celeba_images, 'RGB'))
Out[3]:
<matplotlib.image.AxesImage at 0x7f7e80f1d198>

Preprocess the Data

Since the project's main focus is on building the GANs, we'll preprocess the data for you. The MNIST and CelebA images will be 28x28, with pixel values in the range of -0.5 to 0.5. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.

The MNIST images are grayscale images with a single color channel, while the CelebA images have three color channels (RGB).
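
Since the generator built below ends in a tanh activation, its outputs lie in [-1, 1] while the preprocessed data arrives in [-0.5, 0.5], so real batches need to be scaled by 2 during training. Here is a minimal sketch of that rescaling; the batch is hypothetical random data standing in for helper.get_batch output:

import numpy as np

# Hypothetical stand-in for a preprocessed batch in [-0.5, 0.5]
batch_images = np.random.uniform(-0.5, 0.5, size=(32, 28, 28, 3)).astype(np.float32)

# Scale to [-1, 1] so real images match the generator's tanh output range
batch_images = batch_images * 2
assert -1.0 <= batch_images.min() and batch_images.max() <= 1.0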

Build the Neural Network

You'll build the components necessary for a GAN by implementing the following functions below:

  • model_inputs
  • discriminator
  • generator
  • model_loss
  • model_opt
  • train

Check the Version of TensorFlow and Access to GPU

This will check to make sure you have the correct version of TensorFlow and access to a GPU.

In [4]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer.  You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
TensorFlow Version: 1.1.0
Default GPU Device: /gpu:0

Input

Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
  • Z input placeholder with rank 2 using z_dim.
  • Learning rate placeholder with rank 0.

Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate).

In [5]:
import problem_unittests as tests

def model_inputs(image_width, image_height, image_channels, z_dim):
    """
    Create the model inputs
    :param image_width: The input image width
    :param image_height: The input image height
    :param image_channels: The number of image channels
    :param z_dim: The dimension of Z
    :return: Tuple of (tensor of real input images, tensor of z data, learning rate)
    """
    inputs_real = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='input_real')
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    return inputs_real, inputs_z, learning_rate


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
Tests Passed

Discriminator

Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
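
The reuse flag matters because model_loss below calls discriminator twice (once on real images and once on generated ones), and both calls must share the same weights. Here is a minimal sketch of the tf.variable_scope reuse pattern; the variable 'w' is just a placeholder example:

import tensorflow as tf

with tf.Graph().as_default():
    with tf.variable_scope('discriminator', reuse=False):
        w = tf.get_variable('w', shape=[1])        # first call: the variable is created
    with tf.variable_scope('discriminator', reuse=True):
        w_again = tf.get_variable('w', shape=[1])  # second call: the same variable is returned
    assert w is w_again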

In [6]:
def discriminator(images, reuse=False, alpha=0.01):
    """
    Create the discriminator network
    :param images: Tensor of input image(s)
    :param reuse: Boolean if the weights should be reused
    :param alpha: Negative slope of the leaky ReLU activations
    :return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
    """
    with tf.variable_scope('discriminator', reuse=reuse):
        # 28x28xC -> 14x14x64; tf.maximum(alpha * x, x) is a leaky ReLU
        x1 = tf.layers.conv2d(images, 64, 5, strides=2, padding='same', kernel_initializer=tf.contrib.layers.xavier_initializer())
        relu1 = tf.maximum(alpha * x1, x1)

        # 14x14x64 -> 7x7x128
        x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same', kernel_initializer=tf.contrib.layers.xavier_initializer())
        x2 = tf.layers.batch_normalization(x2, training=True)
        relu2 = tf.maximum(alpha * x2, x2)

        # 7x7x128 -> 4x4x256 ('same' padding rounds 7/2 up to 4)
        x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same', kernel_initializer=tf.contrib.layers.xavier_initializer())
        x3 = tf.layers.batch_normalization(x3, training=True)
        relu3 = tf.maximum(alpha * x3, x3)

        # Flatten and compute a single real/fake logit per image
        flats = tf.reshape(relu3, (-1, 4*4*256))
        logits = tf.layers.dense(flats, 1)
        out = tf.sigmoid(logits)

    return out, logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
Tests Passed

Generator

Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.

In [7]:
def generator(z, out_channel_dim, is_train=True, alpha=0.01):
    """
    Create the generator network
    :param z: Input z
    :param out_channel_dim: The number of channels in the output image
    :param is_train: Boolean if generator is being used for training
    :param alpha: Negative slope of the leaky ReLU activations
    :return: The tensor output of the generator
    """
    # reuse=not is_train: the first (training) call creates the variables;
    # later inference calls (is_train=False) reuse them
    with tf.variable_scope('generator', reuse=not is_train):
        # Project z and reshape to a 7x7x512 feature map
        x1 = tf.layers.dense(z, 7*7*512)
        x1 = tf.reshape(x1, (-1, 7, 7, 512))
        x1 = tf.layers.batch_normalization(x1, training=is_train)
        relu1 = tf.maximum(alpha * x1, x1)

        # 7x7x512 -> 14x14x256
        x2 = tf.layers.conv2d_transpose(relu1, 256, 5, strides=2, padding='same', kernel_initializer=tf.contrib.layers.xavier_initializer())
        x2 = tf.layers.batch_normalization(x2, training=is_train)
        relu2 = tf.maximum(alpha * x2, x2)

        # 14x14x256 -> 28x28x128
        x3 = tf.layers.conv2d_transpose(relu2, 128, 5, strides=2, padding='same', kernel_initializer=tf.contrib.layers.xavier_initializer())
        x3 = tf.layers.batch_normalization(x3, training=is_train)
        relu3 = tf.maximum(alpha * x3, x3)

        # 28x28x128 -> 28x28xout_channel_dim; tanh maps the output to [-1, 1]
        logits = tf.layers.conv2d_transpose(relu3, out_channel_dim, 5, strides=1, padding='same', kernel_initializer=tf.contrib.layers.xavier_initializer())
        out = tf.tanh(logits)

    return out


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
Tests Passed

Loss

Implement model_loss to build the GAN for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss); the losses are summarized in equation form after the list below. Use the following functions you implemented:

  • discriminator(images, reuse=False)
  • generator(z, out_channel_dim, is_train=True)
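
In equation form, with D the discriminator, G the generator, x a real image, and z a noise vector, the losses implemented below are the standard sigmoid cross-entropy GAN losses. The real labels are smoothed from 1.0 to 0.9 (one-sided label smoothing) so the discriminator doesn't become overconfident:

  d_loss = cross_entropy(D(x), 0.9) + cross_entropy(D(G(z)), 0)
  g_loss = cross_entropy(D(G(z)), 1)
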
In [8]:
def model_loss(input_real, input_z, out_channel_dim, alpha=0.01):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param out_channel_dim: The number of channels in the output image
    :param alpha: Negative slope of the leaky ReLU activations
    :return: A tuple of (discriminator loss, generator loss)
    """
    g_model = generator(input_z, out_channel_dim, is_train=True, alpha=alpha)

    # The discriminator is applied twice with shared weights: once to real
    # images (creating the variables) and once to generated ones (reuse=True)
    d_model_real, d_logits_real = discriminator(input_real, reuse=False, alpha=alpha)
    d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)

    # Real labels are smoothed to 0.9 (one-sided label smoothing)
    d_loss_real = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real) * 0.9))
    d_loss_fake = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
    # The generator is rewarded when the discriminator labels its output as real
    g_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))

    d_loss = d_loss_real + d_loss_fake

    return d_loss, g_loss


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)
Tests Passed

Optimization

Implement model_opt to create the optimization operations for the GAN. Use tf.trainable_variables to get all the trainable variables, and filter them by the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).

In [9]:
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
    # Separate the variables by scope so each optimizer only updates its own network
    t_vars = tf.trainable_variables()
    g_vars = [var for var in t_vars if var.name.startswith('generator')]
    d_vars = [var for var in t_vars if var.name.startswith('discriminator')]

    # Batch normalization's moving-average updates live in UPDATE_OPS;
    # run them alongside each training step
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
        g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)

    return d_train_opt, g_train_opt


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
Tests Passed

Neural Network Training

Show Output

Use this function to show the current output of the generator during training. It will help you determine how well the GAN is training.

In [10]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
    """
    Show example output for the generator
    :param sess: TensorFlow session
    :param n_images: Number of Images to display
    :param input_z: Input Z Tensor
    :param out_channel_dim: The number of channels in the output image
    :param image_mode: The mode to use for images ("RGB" or "L")
    """
    cmap = None if image_mode == 'RGB' else 'gray'
    z_dim = input_z.get_shape().as_list()[-1]
    example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])

    samples = sess.run(
        generator(input_z, out_channel_dim, False),
        feed_dict={input_z: example_z})

    images_grid = helper.images_square_grid(samples, image_mode)
    pyplot.imshow(images_grid, cmap=cmap)
    pyplot.show()

Train

Implement train to build and train the GAN. Use the following functions you implemented:

  • model_inputs(image_width, image_height, image_channels, z_dim)
  • model_loss(input_real, input_z, out_channel_dim)
  • model_opt(d_loss, g_loss, learning_rate, beta1)

Use show_generator_output to show generator output while you train. Running show_generator_output for every batch will drastically increase training time and the size of the notebook. It's recommended to print the generator output every 100 batches.

In [11]:
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode, alpha=0.01):
    """
    Train the GAN
    :param epoch_count: Number of epochs
    :param batch_size: Batch Size
    :param z_dim: Z dimension
    :param learning_rate: Learning Rate
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :param get_batches: Function to get batches
    :param data_shape: Shape of the data
    :param data_image_mode: The image mode to use for images ("RGB" or "L")
    """
    # Build the model
    losses = []
    width = data_shape[1]
    height = data_shape[2]
    image_channels = data_shape[3]

    input_real, input_z, lr = model_inputs(width, height, image_channels, z_dim)
    d_loss, g_loss = model_loss(input_real, input_z, image_channels, alpha=alpha)
    d_opt, g_opt = model_opt(d_loss, g_loss, lr, beta1)

    steps = 0
    print_every = 50
    show_every = 100
    steps_per_epoch = data_shape[0]//batch_size
    
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch_i in range(epoch_count):
            for batch_images in get_batches(batch_size):
                # Train the model
                steps += 1
                # Rescale images from [-0.5, 0.5] to [-1, 1] to match the generator's tanh output
                batch_images = batch_images * 2
                
                # Sample random noise for G
                batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))
                
                # Run optimizers
                _ = sess.run(d_opt, feed_dict={input_real: batch_images, input_z: batch_z, lr: learning_rate})
                _ = sess.run(g_opt, feed_dict={input_real: batch_images, input_z: batch_z, lr: learning_rate})

                if steps % print_every == 0:
                    # Every print_every steps, compute the losses and print them out
                    train_loss_d = d_loss.eval({input_real: batch_images, input_z: batch_z, lr: learning_rate})
                    train_loss_g = g_loss.eval({input_z: batch_z})
                    losses.append((train_loss_d, train_loss_g))

                    print("Epoch {}/{}...".format(epoch_i+1, epoch_count),
                          "Step {}/{}...".format(steps - (epoch_i * steps_per_epoch), steps_per_epoch),
                          "Discriminator Loss: {:.4f}...".format(train_loss_d),
                          "Generator Loss: {:.4f}".format(train_loss_g))

                if steps % show_every == 0:
                    show_generator_output(sess, 25, input_z, image_channels, data_image_mode)
    
    return losses

MNIST

Test your GAN architecture on MNIST. After 2 epochs, the GAN should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator, or close to 0.

In [12]:
batch_size = 32
z_dim = 100
learning_rate = 0.001
beta1 = 0.5
alpha = 0.01


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2

mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
    mnist_losses = train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
                         mnist_dataset.shape, mnist_dataset.image_mode, alpha=alpha)
Epoch 1/2... Step 50/1875... Discriminator Loss: 1.5146... Generator Loss: 3.5086
Epoch 1/2... Step 100/1875... Discriminator Loss: 0.3621... Generator Loss: 4.3113
Epoch 1/2... Step 150/1875... Discriminator Loss: 2.6770... Generator Loss: 3.5480
Epoch 1/2... Step 200/1875... Discriminator Loss: 1.3119... Generator Loss: 0.6228
Epoch 1/2... Step 250/1875... Discriminator Loss: 0.5444... Generator Loss: 1.9213
Epoch 1/2... Step 300/1875... Discriminator Loss: 1.2879... Generator Loss: 0.7132
Epoch 1/2... Step 350/1875... Discriminator Loss: 1.1575... Generator Loss: 0.9625
Epoch 1/2... Step 400/1875... Discriminator Loss: 1.5298... Generator Loss: 1.6198
Epoch 1/2... Step 450/1875... Discriminator Loss: 1.2077... Generator Loss: 0.8454
Epoch 1/2... Step 500/1875... Discriminator Loss: 1.3725... Generator Loss: 0.5378
Epoch 1/2... Step 550/1875... Discriminator Loss: 1.3776... Generator Loss: 0.5708
Epoch 1/2... Step 600/1875... Discriminator Loss: 1.8460... Generator Loss: 3.2561
Epoch 1/2... Step 650/1875... Discriminator Loss: 1.2051... Generator Loss: 0.6740
Epoch 1/2... Step 700/1875... Discriminator Loss: 1.1528... Generator Loss: 1.1965
Epoch 1/2... Step 750/1875... Discriminator Loss: 1.4709... Generator Loss: 0.4740
Epoch 1/2... Step 800/1875... Discriminator Loss: 1.3552... Generator Loss: 2.0337
Epoch 1/2... Step 850/1875... Discriminator Loss: 1.0481... Generator Loss: 1.2638
Epoch 1/2... Step 900/1875... Discriminator Loss: 1.7522... Generator Loss: 0.3551
Epoch 1/2... Step 950/1875... Discriminator Loss: 1.1194... Generator Loss: 0.9853
Epoch 1/2... Step 1000/1875... Discriminator Loss: 1.1952... Generator Loss: 0.6765
Epoch 1/2... Step 1050/1875... Discriminator Loss: 1.1302... Generator Loss: 1.5288
Epoch 1/2... Step 1100/1875... Discriminator Loss: 1.2606... Generator Loss: 0.8304
Epoch 1/2... Step 1150/1875... Discriminator Loss: 1.2649... Generator Loss: 0.6812
Epoch 1/2... Step 1200/1875... Discriminator Loss: 1.1598... Generator Loss: 0.6787
Epoch 1/2... Step 1250/1875... Discriminator Loss: 1.1847... Generator Loss: 0.7612
Epoch 1/2... Step 1300/1875... Discriminator Loss: 1.2543... Generator Loss: 0.6532
Epoch 1/2... Step 1350/1875... Discriminator Loss: 1.1656... Generator Loss: 1.5224
Epoch 1/2... Step 1400/1875... Discriminator Loss: 1.1292... Generator Loss: 0.7532
Epoch 1/2... Step 1450/1875... Discriminator Loss: 1.5637... Generator Loss: 0.4471
Epoch 1/2... Step 1500/1875... Discriminator Loss: 1.0022... Generator Loss: 1.0033
Epoch 1/2... Step 1550/1875... Discriminator Loss: 1.2304... Generator Loss: 0.6634
Epoch 1/2... Step 1600/1875... Discriminator Loss: 0.9729... Generator Loss: 0.9867
Epoch 1/2... Step 1650/1875... Discriminator Loss: 1.0780... Generator Loss: 1.0959
Epoch 1/2... Step 1700/1875... Discriminator Loss: 1.0069... Generator Loss: 0.9877
Epoch 1/2... Step 1750/1875... Discriminator Loss: 1.5074... Generator Loss: 0.4630
Epoch 1/2... Step 1800/1875... Discriminator Loss: 1.2150... Generator Loss: 0.7731
Epoch 1/2... Step 1850/1875... Discriminator Loss: 0.9287... Generator Loss: 1.6630
Epoch 2/2... Step 25/1875... Discriminator Loss: 0.7996... Generator Loss: 1.4406
Epoch 2/2... Step 75/1875... Discriminator Loss: 0.8566... Generator Loss: 1.1944
Epoch 2/2... Step 125/1875... Discriminator Loss: 0.7400... Generator Loss: 2.1145
Epoch 2/2... Step 175/1875... Discriminator Loss: 1.5413... Generator Loss: 0.4683
Epoch 2/2... Step 225/1875... Discriminator Loss: 1.1037... Generator Loss: 0.7981
Epoch 2/2... Step 275/1875... Discriminator Loss: 0.8578... Generator Loss: 1.2143
Epoch 2/2... Step 325/1875... Discriminator Loss: 0.9925... Generator Loss: 1.3923
Epoch 2/2... Step 375/1875... Discriminator Loss: 0.7962... Generator Loss: 1.3170
Epoch 2/2... Step 425/1875... Discriminator Loss: 1.4080... Generator Loss: 0.5292
Epoch 2/2... Step 475/1875... Discriminator Loss: 1.1027... Generator Loss: 1.0361
Epoch 2/2... Step 525/1875... Discriminator Loss: 1.5696... Generator Loss: 0.5580
Epoch 2/2... Step 575/1875... Discriminator Loss: 1.2449... Generator Loss: 0.8172
Epoch 2/2... Step 625/1875... Discriminator Loss: 0.8600... Generator Loss: 1.2685
Epoch 2/2... Step 675/1875... Discriminator Loss: 0.8191... Generator Loss: 1.6978
Epoch 2/2... Step 725/1875... Discriminator Loss: 0.8815... Generator Loss: 2.1814
Epoch 2/2... Step 775/1875... Discriminator Loss: 2.6126... Generator Loss: 0.2097
Epoch 2/2... Step 825/1875... Discriminator Loss: 0.8035... Generator Loss: 1.3980
Epoch 2/2... Step 875/1875... Discriminator Loss: 0.9395... Generator Loss: 2.1806
Epoch 2/2... Step 925/1875... Discriminator Loss: 1.8538... Generator Loss: 1.6070
Epoch 2/2... Step 975/1875... Discriminator Loss: 0.7516... Generator Loss: 1.9911
Epoch 2/2... Step 1025/1875... Discriminator Loss: 0.7550... Generator Loss: 1.5107
Epoch 2/2... Step 1075/1875... Discriminator Loss: 1.3853... Generator Loss: 0.5965
Epoch 2/2... Step 1125/1875... Discriminator Loss: 0.7791... Generator Loss: 1.4517
Epoch 2/2... Step 1175/1875... Discriminator Loss: 1.4960... Generator Loss: 0.6604
Epoch 2/2... Step 1225/1875... Discriminator Loss: 0.7695... Generator Loss: 1.3259
Epoch 2/2... Step 1275/1875... Discriminator Loss: 1.1735... Generator Loss: 0.7946
Epoch 2/2... Step 1325/1875... Discriminator Loss: 0.6584... Generator Loss: 1.8154
Epoch 2/2... Step 1375/1875... Discriminator Loss: 0.8374... Generator Loss: 1.2723
Epoch 2/2... Step 1425/1875... Discriminator Loss: 0.9058... Generator Loss: 1.2972
Epoch 2/2... Step 1475/1875... Discriminator Loss: 0.7770... Generator Loss: 2.2283
Epoch 2/2... Step 1525/1875... Discriminator Loss: 1.0532... Generator Loss: 1.0841
Epoch 2/2... Step 1575/1875... Discriminator Loss: 0.9171... Generator Loss: 1.0564
Epoch 2/2... Step 1625/1875... Discriminator Loss: 0.7388... Generator Loss: 1.3995
Epoch 2/2... Step 1675/1875... Discriminator Loss: 1.3349... Generator Loss: 0.6768
Epoch 2/2... Step 1725/1875... Discriminator Loss: 1.0547... Generator Loss: 0.9267
Epoch 2/2... Step 1775/1875... Discriminator Loss: 1.1343... Generator Loss: 0.9639
Epoch 2/2... Step 1825/1875... Discriminator Loss: 0.9388... Generator Loss: 1.1780
Epoch 2/2... Step 1875/1875... Discriminator Loss: 0.6965... Generator Loss: 1.5010

CelebA

Run your GAN on CelebA. It will take around 20 minutes on an average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.

In [13]:
batch_size = 32
z_dim = 100
learning_rate = 0.0002
beta1 = 0.5
alpha = 0.01


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1

celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
    celeba_losses = train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
                          celeba_dataset.shape, celeba_dataset.image_mode, alpha=alpha)
Epoch 1/1... Step 50/6331... Discriminator Loss: 1.0274... Generator Loss: 2.2897
Epoch 1/1... Step 100/6331... Discriminator Loss: 0.5637... Generator Loss: 2.6931
Epoch 1/1... Step 150/6331... Discriminator Loss: 0.6480... Generator Loss: 4.8530
Epoch 1/1... Step 200/6331... Discriminator Loss: 0.8694... Generator Loss: 2.8775
Epoch 1/1... Step 250/6331... Discriminator Loss: 0.5886... Generator Loss: 4.3368
Epoch 1/1... Step 300/6331... Discriminator Loss: 0.4639... Generator Loss: 2.7285
Epoch 1/1... Step 350/6331... Discriminator Loss: 0.5136... Generator Loss: 2.6670
Epoch 1/1... Step 400/6331... Discriminator Loss: 0.3889... Generator Loss: 6.3614
Epoch 1/1... Step 450/6331... Discriminator Loss: 1.2882... Generator Loss: 0.6344
Epoch 1/1... Step 500/6331... Discriminator Loss: 0.9374... Generator Loss: 1.0421
Epoch 1/1... Step 550/6331... Discriminator Loss: 1.3760... Generator Loss: 0.5875
Epoch 1/1... Step 600/6331... Discriminator Loss: 1.9401... Generator Loss: 2.8281
Epoch 1/1... Step 650/6331... Discriminator Loss: 0.6412... Generator Loss: 2.5504
Epoch 1/1... Step 700/6331... Discriminator Loss: 1.1531... Generator Loss: 0.8998
Epoch 1/1... Step 750/6331... Discriminator Loss: 0.9196... Generator Loss: 1.0935
Epoch 1/1... Step 800/6331... Discriminator Loss: 1.0268... Generator Loss: 0.9133
Epoch 1/1... Step 850/6331... Discriminator Loss: 1.3202... Generator Loss: 1.4815
Epoch 1/1... Step 900/6331... Discriminator Loss: 1.1343... Generator Loss: 0.8559
Epoch 1/1... Step 950/6331... Discriminator Loss: 0.9773... Generator Loss: 2.4833
Epoch 1/1... Step 1000/6331... Discriminator Loss: 0.9865... Generator Loss: 1.0480
Epoch 1/1... Step 1050/6331... Discriminator Loss: 0.5591... Generator Loss: 2.2930
Epoch 1/1... Step 1100/6331... Discriminator Loss: 1.4260... Generator Loss: 0.6698
Epoch 1/1... Step 1150/6331... Discriminator Loss: 1.0344... Generator Loss: 0.8678
Epoch 1/1... Step 1200/6331... Discriminator Loss: 0.9927... Generator Loss: 0.9915
Epoch 1/1... Step 1250/6331... Discriminator Loss: 0.7798... Generator Loss: 1.8227
Epoch 1/1... Step 1300/6331... Discriminator Loss: 1.0257... Generator Loss: 0.8978
Epoch 1/1... Step 1350/6331... Discriminator Loss: 0.7610... Generator Loss: 1.3708
Epoch 1/1... Step 1400/6331... Discriminator Loss: 1.1769... Generator Loss: 0.8335
Epoch 1/1... Step 1450/6331... Discriminator Loss: 1.1708... Generator Loss: 2.7447
Epoch 1/1... Step 1500/6331... Discriminator Loss: 1.3052... Generator Loss: 0.5812
Epoch 1/1... Step 1550/6331... Discriminator Loss: 0.7653... Generator Loss: 1.6459
Epoch 1/1... Step 1600/6331... Discriminator Loss: 1.0554... Generator Loss: 0.8406
Epoch 1/1... Step 1650/6331... Discriminator Loss: 1.4424... Generator Loss: 0.4664
Epoch 1/1... Step 1700/6331... Discriminator Loss: 1.6358... Generator Loss: 0.3760
Epoch 1/1... Step 1750/6331... Discriminator Loss: 0.7283... Generator Loss: 1.4175
Epoch 1/1... Step 1800/6331... Discriminator Loss: 1.2711... Generator Loss: 0.6257
Epoch 1/1... Step 1850/6331... Discriminator Loss: 1.5585... Generator Loss: 0.4016
Epoch 1/1... Step 1900/6331... Discriminator Loss: 0.8430... Generator Loss: 1.2847
Epoch 1/1... Step 1950/6331... Discriminator Loss: 0.6976... Generator Loss: 1.5261
Epoch 1/1... Step 2000/6331... Discriminator Loss: 0.6017... Generator Loss: 1.7277
Epoch 1/1... Step 2050/6331... Discriminator Loss: 1.0046... Generator Loss: 0.9073
Epoch 1/1... Step 2100/6331... Discriminator Loss: 0.8979... Generator Loss: 3.1405
Epoch 1/1... Step 2150/6331... Discriminator Loss: 1.0295... Generator Loss: 0.8554
Epoch 1/1... Step 2200/6331... Discriminator Loss: 0.9178... Generator Loss: 1.4566
Epoch 1/1... Step 2250/6331... Discriminator Loss: 0.7209... Generator Loss: 1.9805
Epoch 1/1... Step 2300/6331... Discriminator Loss: 0.6444... Generator Loss: 1.8168
Epoch 1/1... Step 2350/6331... Discriminator Loss: 2.0065... Generator Loss: 0.2583
Epoch 1/1... Step 2400/6331... Discriminator Loss: 0.9122... Generator Loss: 1.0613
Epoch 1/1... Step 2450/6331... Discriminator Loss: 0.6027... Generator Loss: 2.1696
Epoch 1/1... Step 2500/6331... Discriminator Loss: 0.6716... Generator Loss: 1.5773
Epoch 1/1... Step 2550/6331... Discriminator Loss: 1.1006... Generator Loss: 0.7136
Epoch 1/1... Step 2600/6331... Discriminator Loss: 0.8908... Generator Loss: 1.5734
Epoch 1/1... Step 2650/6331... Discriminator Loss: 1.0604... Generator Loss: 0.8805
Epoch 1/1... Step 2700/6331... Discriminator Loss: 0.9288... Generator Loss: 1.3277
Epoch 1/1... Step 2750/6331... Discriminator Loss: 0.7470... Generator Loss: 1.6496
Epoch 1/1... Step 2800/6331... Discriminator Loss: 0.6461... Generator Loss: 2.7800
Epoch 1/1... Step 2850/6331... Discriminator Loss: 1.1779... Generator Loss: 0.7112
Epoch 1/1... Step 2900/6331... Discriminator Loss: 1.0067... Generator Loss: 1.3405
Epoch 1/1... Step 2950/6331... Discriminator Loss: 1.0043... Generator Loss: 1.0975
Epoch 1/1... Step 3000/6331... Discriminator Loss: 1.3124... Generator Loss: 0.6021
Epoch 1/1... Step 3050/6331... Discriminator Loss: 1.0664... Generator Loss: 0.8122
Epoch 1/1... Step 3100/6331... Discriminator Loss: 0.9632... Generator Loss: 0.9619
Epoch 1/1... Step 3150/6331... Discriminator Loss: 1.3692... Generator Loss: 2.6469
Epoch 1/1... Step 3200/6331... Discriminator Loss: 0.7763... Generator Loss: 1.6615
Epoch 1/1... Step 3250/6331... Discriminator Loss: 0.7865... Generator Loss: 1.6225
Epoch 1/1... Step 3300/6331... Discriminator Loss: 1.3251... Generator Loss: 0.7401
Epoch 1/1... Step 3350/6331... Discriminator Loss: 0.9847... Generator Loss: 0.9611
Epoch 1/1... Step 3400/6331... Discriminator Loss: 1.0720... Generator Loss: 0.9905
Epoch 1/1... Step 3450/6331... Discriminator Loss: 0.9718... Generator Loss: 1.0133
Epoch 1/1... Step 3500/6331... Discriminator Loss: 0.6627... Generator Loss: 1.6287
Epoch 1/1... Step 3550/6331... Discriminator Loss: 1.4199... Generator Loss: 0.4828
Epoch 1/1... Step 3600/6331... Discriminator Loss: 1.4124... Generator Loss: 0.5015
Epoch 1/1... Step 3650/6331... Discriminator Loss: 0.8533... Generator Loss: 2.0296
Epoch 1/1... Step 3700/6331... Discriminator Loss: 1.1941... Generator Loss: 0.7124
Epoch 1/1... Step 3750/6331... Discriminator Loss: 1.3853... Generator Loss: 0.5775
Epoch 1/1... Step 3800/6331... Discriminator Loss: 1.0144... Generator Loss: 0.9669
Epoch 1/1... Step 3850/6331... Discriminator Loss: 1.1440... Generator Loss: 0.8314
Epoch 1/1... Step 3900/6331... Discriminator Loss: 1.1247... Generator Loss: 0.9967
Epoch 1/1... Step 3950/6331... Discriminator Loss: 1.3628... Generator Loss: 1.1779
Epoch 1/1... Step 4000/6331... Discriminator Loss: 1.4126... Generator Loss: 0.4925
Epoch 1/1... Step 4050/6331... Discriminator Loss: 1.3000... Generator Loss: 0.6056
Epoch 1/1... Step 4100/6331... Discriminator Loss: 1.0790... Generator Loss: 1.0250
Epoch 1/1... Step 4150/6331... Discriminator Loss: 1.3936... Generator Loss: 0.5453
Epoch 1/1... Step 4200/6331... Discriminator Loss: 1.2832... Generator Loss: 0.6087
Epoch 1/1... Step 4250/6331... Discriminator Loss: 1.2503... Generator Loss: 0.8378
Epoch 1/1... Step 4300/6331... Discriminator Loss: 0.9888... Generator Loss: 0.9278
Epoch 1/1... Step 4350/6331... Discriminator Loss: 1.4178... Generator Loss: 0.4790
Epoch 1/1... Step 4400/6331... Discriminator Loss: 1.6517... Generator Loss: 0.4177
Epoch 1/1... Step 4450/6331... Discriminator Loss: 1.4379... Generator Loss: 0.5870
Epoch 1/1... Step 4500/6331... Discriminator Loss: 1.4625... Generator Loss: 0.6607
Epoch 1/1... Step 4550/6331... Discriminator Loss: 1.1743... Generator Loss: 0.7058
Epoch 1/1... Step 4600/6331... Discriminator Loss: 1.0380... Generator Loss: 1.0453
Epoch 1/1... Step 4650/6331... Discriminator Loss: 1.2662... Generator Loss: 0.7874
Epoch 1/1... Step 4700/6331... Discriminator Loss: 1.3956... Generator Loss: 0.6094
Epoch 1/1... Step 4750/6331... Discriminator Loss: 1.3392... Generator Loss: 0.5527
Epoch 1/1... Step 4800/6331... Discriminator Loss: 0.9403... Generator Loss: 1.0520
Epoch 1/1... Step 4850/6331... Discriminator Loss: 1.1841... Generator Loss: 0.6634
Epoch 1/1... Step 4900/6331... Discriminator Loss: 1.1913... Generator Loss: 0.8325
Epoch 1/1... Step 4950/6331... Discriminator Loss: 1.2856... Generator Loss: 0.7245
Epoch 1/1... Step 5000/6331... Discriminator Loss: 1.2823... Generator Loss: 0.7358
Epoch 1/1... Step 5050/6331... Discriminator Loss: 1.2348... Generator Loss: 0.8137
Epoch 1/1... Step 5100/6331... Discriminator Loss: 1.0690... Generator Loss: 1.0857
Epoch 1/1... Step 5150/6331... Discriminator Loss: 1.7575... Generator Loss: 0.3697
Epoch 1/1... Step 5200/6331... Discriminator Loss: 1.1731... Generator Loss: 0.7424
Epoch 1/1... Step 5250/6331... Discriminator Loss: 1.0448... Generator Loss: 0.9030
Epoch 1/1... Step 5300/6331... Discriminator Loss: 1.1271... Generator Loss: 0.9513
Epoch 1/1... Step 5350/6331... Discriminator Loss: 1.0584... Generator Loss: 1.0845
Epoch 1/1... Step 5400/6331... Discriminator Loss: 1.3250... Generator Loss: 0.7409
Epoch 1/1... Step 5450/6331... Discriminator Loss: 1.5151... Generator Loss: 0.5489
Epoch 1/1... Step 5500/6331... Discriminator Loss: 1.3380... Generator Loss: 0.6513
Epoch 1/1... Step 5550/6331... Discriminator Loss: 1.2112... Generator Loss: 0.7415
Epoch 1/1... Step 5600/6331... Discriminator Loss: 1.2472... Generator Loss: 0.7167
Epoch 1/1... Step 5650/6331... Discriminator Loss: 1.3949... Generator Loss: 0.5530
Epoch 1/1... Step 5700/6331... Discriminator Loss: 1.2552... Generator Loss: 0.6128
Epoch 1/1... Step 5750/6331... Discriminator Loss: 1.5168... Generator Loss: 0.4693
Epoch 1/1... Step 5800/6331... Discriminator Loss: 1.2897... Generator Loss: 0.6490
Epoch 1/1... Step 5850/6331... Discriminator Loss: 1.3479... Generator Loss: 0.5596
Epoch 1/1... Step 5900/6331... Discriminator Loss: 1.5828... Generator Loss: 0.4147
Epoch 1/1... Step 5950/6331... Discriminator Loss: 1.3462... Generator Loss: 0.6539
Epoch 1/1... Step 6000/6331... Discriminator Loss: 1.3469... Generator Loss: 0.7144
Epoch 1/1... Step 6050/6331... Discriminator Loss: 1.4655... Generator Loss: 0.4938
Epoch 1/1... Step 6100/6331... Discriminator Loss: 1.2977... Generator Loss: 0.7434
Epoch 1/1... Step 6150/6331... Discriminator Loss: 1.3225... Generator Loss: 0.6215
Epoch 1/1... Step 6200/6331... Discriminator Loss: 1.5132... Generator Loss: 0.5627
Epoch 1/1... Step 6250/6331... Discriminator Loss: 1.3740... Generator Loss: 0.5218
Epoch 1/1... Step 6300/6331... Discriminator Loss: 1.2334... Generator Loss: 0.6946

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
